Voice generation involves creating voices that best conform to supporting evidence, such as the structure of a skull or a 3D image or scan of a face. Often, the evidence does not contain the information needed to generate every aspect of the voice signal, such as language, content, and style of delivery. This is especially true when the evidence is merely the image of a face or the structure of a skull. The process therefore poses many challenges, each of which requires deeper consideration and a different set of approaches.
One such challenge is rendering the correct intonation, or prosody, onto the generated (synthesized) voice signal. Of the many solutions we have explored, a promising one is to take a “style of rendering,” or prosody, from an exemplar voice sample, learned from a database of renderings by humans, and “transfer” it to the generated voice signal. Note that in this case the goal is simply to emulate a prosody -- the content of the generated signal may differ from that of the exemplar.
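To make the idea concrete, the sketch below shows one simple, hypothetical way such a transfer could work at the pitch-contour level: normalize the exemplar's F0 contour into a length- and level-independent template, then impose that shape onto the target utterance's own pitch range. This is only an illustrative sketch, not our actual mechanism; the function names are ours, and in practice the F0 contours would come from a pitch tracker and the transfer would be learned rather than rule-based.

```python
import numpy as np

def extract_prosody(f0):
    """Reduce an exemplar F0 contour (Hz, one value per frame) to a
    length- and level-independent "shape" template (illustrative only)."""
    log_f0 = np.log(np.asarray(f0, dtype=float))
    # Remove the speaker-dependent mean and scale so only the
    # intonation shape remains.
    return (log_f0 - log_f0.mean()) / (log_f0.std() + 1e-8)

def transfer_prosody(template, target_f0):
    """Impose the exemplar's contour shape onto the target utterance,
    keeping the target speaker's own pitch level and range."""
    log_t = np.log(np.asarray(target_f0, dtype=float))
    # The exemplar and target may have different content and hence
    # different lengths, so resample the template to the target length.
    x_old = np.linspace(0.0, 1.0, len(template))
    x_new = np.linspace(0.0, 1.0, len(log_t))
    warped = np.interp(x_new, x_old, template)
    # Re-apply the target speaker's mean and scale in the log domain.
    return np.exp(log_t.mean() + warped * log_t.std())
```

For example, lifting a rising contour from a 50-frame exemplar and applying it to an 80-frame target yields an 80-frame contour that rises within the target's pitch range.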
In the examples below, we show the results of one such mechanism that we have devised to transfer prosody from learned exemplars to the generated signal. The first set, labeled “references,” shows sets of 5 exemplars derived from different databases. We then attempt to "lift" the prosody from these exemplars.
The two sets of examples that follow demonstrate the results of this process on signals that have the same linguistic content, and those that have different content (which is our final goal).
Different prosodies are automatically factorized from the training dataset. Each of these factorized "prosodies" can then be transferred to a test set.
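One plausible way to picture this factorization is as clustering: given an utterance-level prosody embedding for each training rendering, the discrete "prosodies" fall out as cluster centroids. The sketch below is a minimal k-means illustration under that assumption; it is not the mechanism used here, and the embeddings are assumed to come from some reference encoder.

```python
import numpy as np

def factorize_prosodies(embeddings, n_styles, n_iters=50):
    """Cluster utterance-level prosody embeddings into discrete styles.

    A plain k-means sketch: each returned centroid stands for one
    factorized "prosody" that could later be transferred to a test
    utterance. Initialization is farthest-point (deterministic).
    """
    X = np.asarray(embeddings, dtype=float)
    # Farthest-point initialization: start from the first embedding,
    # then repeatedly pick the point farthest from all chosen centroids.
    centroids = [X[0]]
    for _ in range(1, n_styles):
        dists = np.min(
            [np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[int(dists.argmax())])
    centroids = np.array(centroids)

    for _ in range(n_iters):
        # Assign each utterance to its nearest style centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Recompute each centroid from its assigned members.
        for k in range(n_styles):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    return centroids, labels
```

Each centroid can then serve as the conditioning "prosody" when synthesizing a test utterance, which is how the transferred examples below should be read.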
Utterance content: My mother always took him to the town on a market day in a light gig.
Utterance content: So we never saw Dick any more.
Utterance content: You will be to visit me in prison with a basket of provisions, you will not refuse to visit me in prison?
In the following, the linguistic content of the reference utterance is different from that of the generated utterance.